
engine/schema: fix new systemvm template is not registered during upgrade if hypervisor is not KVM #12952

Draft
weizhouapache wants to merge 3 commits into apache:4.20 from weizhouapache:4.20-fix-missing-new-systemvm-template

Conversation

@weizhouapache
Member

Description

This PR fixes an issue where the new systemvm template is not registered during upgrade for non-KVM hypervisor types.

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • Build/CI
  • Test (unit or integration test code)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

Bug Severity

  • BLOCKER
  • Critical
  • Major
  • Minor
  • Trivial

Screenshots (if appropriate):

How Has This Been Tested?

How did you try to break this feature and the system with this change?

@weizhouapache
Member Author

@blueorangutan package

@blueorangutan

@weizhouapache a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 17343

@weizhouapache
Member Author

@blueorangutan package

@blueorangutan

@weizhouapache a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

new Pair<>(Hypervisor.HypervisorType.XenServer, CPU.CPUArch.getDefault()),
new Pair<>(Hypervisor.HypervisorType.Hyperv, CPU.CPUArch.getDefault()),
new Pair<>(Hypervisor.HypervisorType.LXC, CPU.CPUArch.getDefault()),
new Pair<>(Hypervisor.HypervisorType.Ovm3, CPU.CPUArch.getDefault())
Contributor


@weizhouapache when the arch type is null, it is defaulted to amd64; is the null arch the issue here?

protected static String getHypervisorArchKey(Hypervisor.HypervisorType hypervisorType, CPU.CPUArch arch) {
if (Hypervisor.HypervisorType.KVM.equals(hypervisorType)) {
return String.format("%s-%s", hypervisorType.name().toLowerCase(),
arch == null ? CPU.CPUArch.amd64.getType() : arch.getType());
}
return hypervisorType.name().toLowerCase();
}
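The asymmetry in this key logic can be shown with a minimal standalone sketch: only KVM keys include the arch (with null falling back to amd64), while non-KVM keys drop the arch entirely. The enum values and the "x86_64" type string below are simplified stand-ins modeled on CloudStack's CPU.CPUArch, not the real classes.

```java
public class HypervisorArchKeyDemo {
    enum HypervisorType { KVM, VMware, XenServer }

    // Assumed mapping: amd64 -> "x86_64", matching the cluster table output shown later.
    enum CPUArch {
        amd64("x86_64"), arm64("aarch64");
        private final String type;
        CPUArch(String type) { this.type = type; }
        String getType() { return type; }
    }

    // Mirrors getHypervisorArchKey(): only KVM keys carry the arch,
    // and a null arch is defaulted to amd64's type.
    static String getHypervisorArchKey(HypervisorType hypervisorType, CPUArch arch) {
        if (HypervisorType.KVM.equals(hypervisorType)) {
            return String.format("%s-%s", hypervisorType.name().toLowerCase(),
                    arch == null ? CPUArch.amd64.getType() : arch.getType());
        }
        return hypervisorType.name().toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(getHypervisorArchKey(HypervisorType.KVM, null));    // kvm-x86_64
        System.out.println(getHypervisorArchKey(HypervisorType.VMware, null)); // vmware
    }
}
```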

Contributor


also, make note of the changes in the below PR in main (no need of forward merging the changes here to main)

https://github.com/apache/cloudstack/pull/11656/changes#diff-a9c9a38684718059c060de404bf9529de96e502f1e81b30793e6a32f725042a9

Member Author

@weizhouapache weizhouapache Apr 2, 2026


@sureshanaparti
in another place

the arch of MetadataTemplateDetails comes from hypervisorType.second(), which is null for non-KVM

            NewTemplateMap.put(key, new MetadataTemplateDetails(
                    hypervisorType.first(),
                    section.get("templatename"),
                    section.get("filename"),
                    section.get("downloadurl"),
                    section.get("checksum"),
                    hypervisorType.second(),
                    section.get("guestos")));

but in hypervisorsInUse, the arch is amd64/x86_64 (default arch of clusters)

hypervisorsInUse = clusterDao.listDistinctHypervisorsArchAcrossClusters(null);

cluster arch

mysql> SELECT DISTINCT hypervisor_type,arch from cluster;
+-----------------+--------+
| hypervisor_type | arch   |
+-----------------+--------+
| VMware          | x86_64 |
+-----------------+--------+
1 row in set (0.00 sec)

so the check always returns false (line 1034)

boolean isHypervisorArchMatchMetadata = hypervisorsInUse.stream()
.anyMatch(p -> p.first().equals(templateDetails.getHypervisorType())
&& Objects.equals(p.second(), templateDetails.getArch()));
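The failure mode described above can be reproduced with a self-contained sketch, assuming simplified String stand-ins for the hypervisor type and arch and a minimal Pair record in place of CloudStack's own: the template metadata carries a null arch for a non-KVM hypervisor, the cluster list carries "x86_64", and Objects.equals(...) therefore never matches.

```java
import java.util.List;
import java.util.Objects;

public class ArchMismatchDemo {
    // Minimal stand-in for CloudStack's Pair class.
    record Pair<A, B>(A first, B second) {}

    // Same anyMatch shape as the check quoted above (line 1034).
    static boolean archMatches(List<Pair<String, String>> hypervisorsInUse,
                               String hypervisorType, String arch) {
        return hypervisorsInUse.stream()
                .anyMatch(p -> p.first().equals(hypervisorType)
                        && Objects.equals(p.second(), arch));
    }

    public static void main(String[] args) {
        // hypervisorsInUse as read from the cluster table: VMware on x86_64
        List<Pair<String, String>> hypervisorsInUse = List.of(new Pair<>("VMware", "x86_64"));

        // Template metadata for non-KVM has a null arch, so this is false.
        System.out.println(archMatches(hypervisorsInUse, "VMware", null));
    }
}
```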

Member Author


good point @sureshanaparti
so the issue should not appear in 4.22, great

Contributor

@sureshanaparti sureshanaparti Apr 2, 2026


ok @weizhouapache, how about replacing hypervisorType.second() == null ? CPU.CPUArch.amd64.getType() : hypervisorType.second().getType() at

maybe move it to a method, and call that same method here and in getHypervisorArchKey()

private String getCPUArchType(CPU.CPUArch arch) {
    if (arch == null) {
        return CPU.CPUArch.amd64.getType();
    }
    return arch.getType();
}
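A minimal sketch of the suggested refactor: one shared null-to-default mapping that both call sites would use, so the metadata key and the comparison agree on the arch. The CPUArch enum and its type strings are simplified assumptions, not the real CloudStack classes.

```java
public class CpuArchTypeDemo {
    // Assumed stand-in for CPU.CPUArch with amd64 mapping to "x86_64".
    enum CPUArch {
        amd64("x86_64"), arm64("aarch64");
        private final String type;
        CPUArch(String type) { this.type = type; }
        String getType() { return type; }
    }

    // The proposed shared helper: null arch normalizes to amd64's type.
    static String getCPUArchType(CPUArch arch) {
        if (arch == null) {
            return CPUArch.amd64.getType();
        }
        return arch.getType();
    }

    public static void main(String[] args) {
        System.out.println(getCPUArchType(null));          // x86_64
        System.out.println(getCPUArchType(CPUArch.arm64)); // aarch64
    }
}
```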

@codecov

codecov bot commented Apr 2, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 16.26%. Comparing base (e2497cf) to head (57e1850).

Additional details and impacted files
@@            Coverage Diff            @@
##               4.20   #12952   +/-   ##
=========================================
  Coverage     16.26%   16.26%           
- Complexity    13433    13434    +1     
=========================================
  Files          5665     5665           
  Lines        500530   500530           
  Branches      60787    60787           
=========================================
+ Hits          81395    81417   +22     
+ Misses       410044   410021   -23     
- Partials       9091     9092    +1     
Flag Coverage Δ
uitests 4.15% <ø> (ø)
unittests 17.12% <ø> (+<0.01%) ⬆️


@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ el10 ✔️ debian ✔️ suse15. SL-JID 17344

@weizhouapache
Member Author

@blueorangutan test

@blueorangutan

@weizhouapache a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

Contributor

@shwstppr shwstppr left a comment


code lgtm

@blueorangutan

[SF] Trillian test result (tid-15809)
Environment: kvm-ol8 (x2), zone: Advanced Networking with Mgmt server ol8
Total time taken: 54910 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr12952-t15809-kvm-ol8.zip
Smoke tests completed. 141 look OK, 0 have errors, 0 did not run
Only failed and skipped test results are shown below:

Test Result Time (s) Test File

@abh1sar abh1sar added this to the 4.20.3 milestone Apr 3, 2026